Disentangling Generative Factors of Physical Fields Using Variational Autoencoders
Authors
Abstract
The ability to extract generative parameters from high-dimensional fields of data in an unsupervised manner is a highly desirable yet unrealized goal in computational physics. This work explores the use of variational autoencoders (VAEs) for non-linear dimension reduction, with the specific aim of disentangling the low-dimensional latent variables so that they identify the independent physical parameters that generated the data. A disentangled decomposition is interpretable and can be transferred to a variety of tasks, including generative modeling, design optimization, and probabilistic reduced-order modelling. A major emphasis of this work is to characterize disentanglement using VAEs while minimally modifying the classic VAE loss function (i.e., the Evidence Lower Bound) to maintain high reconstruction accuracy. The loss landscape is characterized by over-regularized local minima which surround desirable solutions. We illustrate comparisons between disentangled and entangled representations by juxtaposing learned latent distributions with the true generative factors in a model porous flow problem. Hierarchical priors are shown to facilitate the learning of disentangled representations. The regularization loss is unaffected by rotation of the latent space when training with rotationally-invariant priors; thus, learning non-rotationally-invariant priors aids in capturing the properties of generative factors, improving disentanglement. Finally, it is shown that semi-supervised learning, accomplished by labeling a small number of samples (O(1%)), results in accurate disentangled latent representations that are consistently learned.
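For context on the loss function the abstract refers to, the sketch below shows the two terms of the classic VAE objective (the Evidence Lower Bound): a reconstruction term and a KL-divergence regularizer against a standard normal prior. This is a minimal NumPy illustration with hypothetical helper names, not the paper's implementation; a `beta` weight is included only to indicate where "minimal modification" of the loss typically happens.

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL divergence between the diagonal-Gaussian posterior
    N(mu, exp(log_var)) and the standard normal prior N(0, I),
    summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def negative_elbo(x, x_recon, mu, log_var, beta=1.0):
    """Classic VAE loss: reconstruction error plus a (beta-weighted)
    KL regularizer. beta = 1 recovers the standard ELBO."""
    recon = np.sum((x - x_recon) ** 2)  # Gaussian likelihood up to constants
    return recon + beta * gaussian_kl(mu, log_var)

# A posterior that exactly matches the prior contributes zero KL:
print(gaussian_kl(np.zeros(4), np.zeros(4)))  # → 0.0
```

Over-regularization, as discussed in the abstract, corresponds to the KL term dominating this objective and collapsing latent dimensions toward the prior at the expense of reconstruction accuracy.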
Similar Resources
Green Generative Modeling: Recycling Dirty Data using Recurrent Variational Autoencoders
This paper explores two useful modifications of the recent variational autoencoder (VAE), a popular deep generative modeling framework that dresses traditional autoencoders with probabilistic attire. The first involves a specially-tailored form of conditioning that allows us to simplify the VAE decoder structure while simultaneously introducing robustness to outliers. In a related vein, a secon...
Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks
Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference mo...
Disentangling Factors of Variation via Generative Entangling
Here we propose a novel model family with the objective of learning to disentangle the factors of variation in data. Our approach is based on the spike-and-slab restricted Boltzmann machine, which we generalize to include higher-order interactions among multiple latent variables. Seen from a generative perspective, the multiplicative interactions emulate the entangling of factors of variation. ...
Image Tranformation Using Variational Autoencoders
The way data are stored in a computer is definitively not the most intelligible approach that one can think about even though it makes computation and communication very convenient. This issue is essentially equivalent to dimensionality reduction problem under the assumption that the data can be embedded into a low-dimensional smooth manifold (Olah [2014]). We have seen couple of examples in th...
A Generative Model For Zero Shot Learning Using Conditional Variational Autoencoders
Zero shot learning in Image Classification refers to the setting where images from some novel classes are absent in the training data but other information such as natural language descriptions or attribute vectors of the classes are available. This setting is important in the real world since one may not be able to obtain images of all the possible classes at training. While previous approache...
Journal
Journal title: Frontiers in Physics
Year: 2022
ISSN: 2296-424X
DOI: https://doi.org/10.3389/fphy.2022.890910